Use a Map for createTransformer memoization #331
Conversation
@taj-p PTAL
LGTM! I like how this prevents mutating the object, prevents the creation of another map, conserves memory, and uses a Map instead of a dynamic object.
With this approach, we are retaining the input objects in the cache `Map` directly. With the previous approach we only kept the string IDs in the cache object. I don't think this is an issue, because the input objects are already retained by the
@mweststrate PTAL when you have some time, thanks! 🙏
@urugator if you have thoughts on this
LGTM |
`createTransformer()` memoizes object transformations by creating a key for each possible input of type `A`, then storing a mapping from that key to an object of type `B` in the context variable `views`. The key is stored on each `A` as a non-enumerable property named `$transformId`, so that each object can be looked up in the `views` cache later.

If we change `views` to be a `Map` instead of a plain object, we can eliminate the need for the IDs entirely. With this new implementation, we still treat string and number inputs by value, and we still treat object inputs by identity; this follows directly from the key-comparison semantics of `Map` (SameValueZero).

This saves a small amount of memory: the unique memo IDs created for each input go away, as does the `$transformId` property that was added to each input object. It may also have a performance benefit, because we no longer modify the hidden class (shape) of the inputs.